
    VeRi3D: Generative Vertex-based Radiance Fields for 3D Controllable Human Image Synthesis

    Unsupervised learning of 3D-aware generative adversarial networks has recently made significant progress. Some recent work demonstrates promising results in learning human generative models with neural articulated radiance fields, yet their generalization ability and controllability lag behind parametric human models: they do not generalize well to novel poses and shapes, and they are not part-controllable. To address these problems, we propose VeRi3D, a generative human vertex-based radiance field parameterized by the vertices of the parametric human template SMPL. We map each 3D point to a local coordinate system defined by its neighboring vertices, and use the corresponding vertex features and local coordinates to map it to color and density values. We demonstrate that this simple approach generates photorealistic human images with free control over camera pose, human pose, and shape, while also enabling part-level editing.
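
    A minimal sketch (not the authors' implementation) of the vertex-based conditioning idea described above, in NumPy: each 3D query point is expressed in local coordinates relative to its nearest SMPL vertices, and those offsets plus the matching per-vertex features are decoded to color and density. The nearest-neighbour rule, feature dimensions, and decoder here are placeholder assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    N_VERTS, FEAT_DIM, K = 6890, 32, 4                  # SMPL has 6890 vertices
    verts = rng.normal(size=(N_VERTS, 3))               # placeholder template vertices
    vert_feats = rng.normal(size=(N_VERTS, FEAT_DIM))   # learnable per-vertex features (random here)

    def decode(x):
        """Placeholder for the color/density decoder (an MLP in practice)."""
        w = rng.normal(size=(x.shape[-1], 4))           # 3 color channels + 1 density
        return x @ w

    def query_radiance(points):
        """points: (P, 3) 3D query points -> (P, 4) color + density."""
        # K nearest vertices by Euclidean distance (a simplifying assumption).
        d2 = ((points[:, None, :] - verts[None, :, :]) ** 2).sum(-1)   # (P, N)
        knn = np.argsort(d2, axis=1)[:, :K]                            # (P, K)
        local = points[:, None, :] - verts[knn]                        # local offsets (P, K, 3)
        feats = vert_feats[knn]                                        # (P, K, F)
        # Concatenate local coordinates with vertex features, decode, average over neighbors.
        per_vert = decode(np.concatenate([local, feats], axis=-1))     # (P, K, 4)
        return per_vert.mean(axis=1)                                   # (P, 4)

    print(query_radiance(rng.normal(size=(5, 3))).shape)   # (5, 4)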

    Deep Generative Models on 3D Representations: A Survey

    Generative models, an important family of statistical models, aim to learn the observed data distribution so as to generate new instances. Along with the rise of neural networks, deep generative models, such as variational autoencoders (VAEs) and generative adversarial networks (GANs), have made tremendous progress in 2D image synthesis. Recently, researchers have shifted their attention from the 2D space to the 3D space, since 3D data better aligns with our physical world and hence holds great practical potential. However, unlike a 2D image, which has an efficient representation (i.e., the pixel grid) by nature, representing 3D data poses far greater challenges. Concretely, an ideal 3D representation should be expressive enough to model shapes and appearances in detail, and efficient enough to model high-resolution data with fast speed and low memory cost. However, existing 3D representations, such as point clouds, meshes, and recent neural fields, usually fail to meet these requirements simultaneously. In this survey, we provide a thorough review of the development of 3D generation, including 3D shape generation and 3D-aware image synthesis, from the perspectives of both algorithms and, more importantly, representations. We hope that our discussion helps the community track the evolution of this field and sparks innovative ideas to advance this challenging task.

    Learning Interpretable BEV Based VIO without Deep Neural Networks

    Monocular visual-inertial odometry (VIO) is a critical problem in robotics and autonomous driving. Traditional methods solve it with filtering or optimization; while fully interpretable, they rely on manual intervention and empirical parameter tuning. Learning-based approaches, on the other hand, allow end-to-end training but require large amounts of training data to learn millions of parameters, and these non-interpretable, heavy models hinder generalization. In this paper, we propose a fully differentiable and interpretable bird's-eye-view (BEV) based VIO model for robots with local planar motion that can be trained without deep neural networks. Specifically, we first adopt an Unscented Kalman Filter as a differentiable layer to predict pitch and roll, where the noise covariance matrices are learned to filter the raw IMU data. Second, the refined pitch and roll are used to obtain a gravity-aligned BEV image of each frame via differentiable camera projection. Finally, a differentiable pose estimator recovers the remaining 3-DoF pose between BEV frames, yielding a 5-DoF pose estimate. Our method learns the covariance matrices end-to-end, supervised by the pose estimation loss, and outperforms empirical baselines. Experimental results on synthetic and real-world datasets show that our simple approach is competitive with state-of-the-art methods and generalizes well to unseen scenes.
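
    A minimal structural sketch (an assumption, not the paper's implementation) of the three-stage pipeline described above: filter pitch and roll from IMU data, warp each frame to a gravity-aligned BEV image, and estimate the remaining planar motion between consecutive BEV frames, giving 5 DoF in total. The filter, projection, and pose estimator below are placeholders.

    import numpy as np

    def filter_pitch_roll(imu_window, noise_cov):
        """Placeholder for the differentiable UKF layer; here just accelerometer tilt
        from the window mean. noise_cov would be learned end-to-end in the real model."""
        ax, ay, az = imu_window.mean(axis=0)
        pitch = np.arctan2(-ax, np.hypot(ay, az))
        roll = np.arctan2(ay, az)
        return pitch, roll

    def gravity_aligned_bev(image, pitch, roll):
        """Placeholder for the differentiable camera-to-BEV projection."""
        return image  # a real implementation would warp with a tilt-compensated homography

    def planar_pose(bev_prev, bev_curr):
        """Placeholder for the differentiable BEV pose estimator (x, y, yaw)."""
        return np.zeros(3)

    def vio_step(img_prev, img_curr, imu_window, noise_cov):
        pitch, roll = filter_pitch_roll(imu_window, noise_cov)
        bev_prev = gravity_aligned_bev(img_prev, pitch, roll)
        bev_curr = gravity_aligned_bev(img_curr, pitch, roll)
        x, y, yaw = planar_pose(bev_prev, bev_curr)
        return np.array([x, y, yaw, pitch, roll])   # 5-DoF relative pose

    pose = vio_step(np.zeros((64, 64)), np.zeros((64, 64)),
                    np.array([[0.1, 0.0, 9.8]]), np.eye(3))
    print(pose.shape)   # (5,)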

    DORec: Decomposed Object Reconstruction Utilizing 2D Self-Supervised Features

    Decomposing a target object from a complex background while reconstructing it is challenging. Most approaches obtain object-instance perception from manual labels, but the annotation procedure is costly. Recent advances in 2D self-supervised learning have opened new prospects for object-aware representations, yet it remains unclear how to leverage such noisy 2D features for clean decomposition. In this paper, we propose a Decomposed Object Reconstruction (DORec) network based on neural implicit representations. Our key idea is to transfer 2D self-supervised features into masks of two levels of granularity to supervise the decomposition: a binary mask indicating the foreground regions and a K-cluster mask indicating semantically similar regions. The two masks are complementary and lead to robust decomposition. Experimental results show the superiority of DORec in segmenting and reconstructing the foreground object on various datasets.
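
    A minimal sketch, assuming dense per-pixel self-supervised features (e.g. DINO-style descriptors) are already available, of how the two kinds of supervision masks described above could be derived: a K-cluster mask from k-means over the features, and a binary foreground mask from a crude saliency threshold. Both choices are illustrative placeholders, not the DORec pipeline.

    import numpy as np

    def kmeans_labels(feats, k, iters=10, seed=0):
        """Tiny k-means over (N, D) features -> (N,) cluster ids."""
        rng = np.random.default_rng(seed)
        centers = feats[rng.choice(len(feats), k, replace=False)]
        for _ in range(iters):
            d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(axis=1)
            for c in range(k):
                if (labels == c).any():
                    centers[c] = feats[labels == c].mean(axis=0)
        return labels

    H, W, D, K = 32, 32, 64, 6
    feats = np.random.default_rng(1).normal(size=(H, W, D))    # placeholder 2D feature map
    saliency = np.linalg.norm(feats, axis=-1)                   # crude foreground score (assumption)

    cluster_mask = kmeans_labels(feats.reshape(-1, D), K).reshape(H, W)   # K-cluster mask
    binary_mask = (saliency > np.median(saliency)).astype(np.uint8)       # binary foreground mask
    print(cluster_mask.shape, binary_mask.shape)   # (32, 32) (32, 32)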

    The Insulation Properties of Oil-Impregnated Insulation Paper Reinforced with Nano-TiO2

    Oil-impregnated insulation paper is widely used in transformers because of its low cost and desirable physical and electrical properties. However, research on improving its insulation properties remains scarce. In this paper, nano-TiO2 was attached to the surface of the cellulose used to make the insulation paper. After the nano-TiO2-reinforced, oil-impregnated insulation paper was prepared, its tensile strength, breakdown strength, and dielectric properties were measured to determine whether the modified paper offers better insulation performance. The results show no significant change in tensile strength, while the breakdown strength improved markedly from 51.13 kV/mm to 61.78 kV/mm; the relative dielectric constant, dielectric loss, and conductivity all declined. The discussion reveals that nano-TiO2 plays the major role in this behavior: its presence changes the cellulose-oil contact interface and produces a large number of shallow traps, which alter the insulation properties of the paper. These results offer a new method to improve the properties of oil-impregnated insulation paper.